The Cost of Consistency: Submodular Maximization with Constant Recourse

Dütting, Paul, Fusco, Federico, Lattanzi, Silvio, Norouzi-Fard, Ashkan, Svensson, Ola, Zadimoghaddam, Morteza

arXiv.org Machine Learning

In this work, we study online submodular maximization and how the requirement of maintaining a stable solution impacts the approximation. In particular, we seek bounds on the best-possible approximation ratio attainable when the algorithm is allowed to make at most a constant number of updates per step. We show a tight information-theoretic bound of $\tfrac{2}{3}$ for general monotone submodular functions, and an improved (also tight) bound of $\tfrac{3}{4}$ for coverage functions. Since both of these bounds are attained by non-poly-time algorithms, we also give a poly-time randomized algorithm that achieves a $0.51$-approximation. Combined with an information-theoretic hardness of $\tfrac{1}{2}$ for deterministic algorithms from prior work, our results thus show a separation between deterministic and randomized algorithms, both information-theoretically and for poly-time algorithms.
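For readers unfamiliar with the objects in this abstract, the following minimal sketch (not the paper's algorithm) illustrates a coverage function, which is monotone submodular, being maximized under a cardinality constraint by the classic offline greedy of Nemhauser, Wolsey, and Fisher; the example data is invented for illustration.

```python
def coverage(selected, sets):
    """Coverage function f(S) = |union of the chosen sets|.
    Coverage functions are monotone submodular."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy(sets, k):
    """Classic greedy: pick k sets, each time adding the set with the
    largest marginal gain. Offline, this gives a (1 - 1/e)-approximation
    for monotone submodular maximization under a cardinality constraint."""
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1
        for i in range(len(sets)):
            if i in chosen:
                continue
            gain = coverage(chosen + [i], sets) - coverage(chosen, sets)
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
    return chosen

# Hypothetical toy instance: four ground sets, budget k = 2.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
picked = greedy(sets, 2)
print(picked, coverage(picked, sets))  # covers all of {1,...,6}
```

The online/recourse setting studied in the paper differs in that elements arrive over time and the algorithm may change only a constant number of solution elements per step; the sketch above is only meant to ground the terms "coverage function" and "approximation ratio".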


Reviews: Multi-objective Maximization of Monotone Submodular Functions with Cardinality Constraint

Neural Information Processing Systems

The paper proposes three efficient multi-objective submodular maximization algorithms. I like this paper and have no clear objections to it. The techniques (using a one-pass procedure to pick the heavy elements, and separating MWU from the continuous greedy) are simple and effective, and seem applicable to more general problems. The claim in the first paragraph (many well known ...) is arguable: there are many non-submodular combinatorial optimization problems.